Generative AI Adoption and Impact on Fraud

by Alex Lvoff, Janine Movish 6 min read August 24, 2023

“Grandma, it’s me, Mike.”

Imagine hearing the voice of a loved one (or what sounds like it) informing you they were arrested and in need of bail money.

Panicked, a desperate family member may follow instructions to withdraw a large sum of money and hand it to a courier. Suspicious, they even make a video call, only to see a blurry image on the other end, paired with the same familiar voice.

When the fight-or-flight response settles, reality hits.

Sadly, this is not the plot of an upcoming Netflix movie. This is fraud – an example of a new wave of grandparent scams (also called family emergency scams) happening at scale across the U.S.

While generative AI is driving efficiencies, personalization, and improvements in many areas, it is also being adopted by fraudsters. Generative AI can be used to create highly personalized, convincing messages tailored to a specific victim. By analyzing publicly available social media profiles and other personal information, scammers can use it to create fake accounts, emails, or phone calls that mimic the voice and mannerisms of a grandchild or family member in distress. This technology can make it particularly difficult to distinguish real communication from fake, leaving victims more vulnerable to fraud.

Generative AI can also be used to create deepfake videos or audio recordings that show the supposed family member in distress or reinforce the scammer’s story. These deepfakes can be remarkably realistic, making it even harder for victims to recognize fraudulent activity.

What is Generative AI?

Generative artificial intelligence (GenAI) describes algorithms that can be used to create new content, including audio, code, images, text, simulations, and videos. Generative AI has the potential to revolutionize many industries by creating new and innovative content, but it also presents a significant risk for financial institutions. Cyber attackers can use generative AI to produce sophisticated malware, phishing schemes, and other fraudulent activities that can cause data breaches, financial losses, and reputational damage.

This poses a challenge for financial organizations, as human error remains one of the weakest links in cybersecurity. Fraudsters capitalize on emotions such as fear, stress, desperation, and inattention, making malicious, AI-generated content used to defraud financial institutions especially difficult to defend against.

Four ways generative AI is used for fraud:

Fraud automation at scale
Fraudulent activities often involve multiple steps that can be complex and time-consuming. GenAI, however, may enable fraudsters to automate each of these steps, establishing an end-to-end framework for fraud attacks. GenAI can generate scripts or code that power programs capable of autonomously harvesting personal data and breaching accounts. Previously, developing such tools required seasoned programmers, with each stage of the process built separately. Now, a fraudster can assemble an all-in-one program without specialized knowledge, which amplifies the danger. It can also be used to accelerate fraudsters’ techniques such as credential stuffing, card testing, and brute-force attacks.

Text content generation
In the past, typos and errors were often a reliable way to spot fraudulent messages. GenAI changes that: it generates impeccably written text with an uncanny authenticity, making deceptive activity considerably harder to identify. By feeding it prompts or sample content to replicate, fraudsters can produce realistic text that sounds as if it came from a familiar person, organization, or business. Large language model (LLM) tools also let scammers carry on text-based conversations with multiple victims at once, skillfully manipulating them into taking actions that serve the perpetrators’ interests.

Image and video manipulation
In a matter of seconds, fraudsters of any skill level can now produce highly convincing videos or images powered by GenAI. The technology leverages deep learning, training AI models on vast collected datasets. Once trained, these models can generate visuals that closely resemble a desired target, and by blending or superimposing the generated images onto specific frames, the original content can be replaced with manipulated visuals. AI text-to-image generators, powered by artificial neural networks, let fraudsters simply type word prompts and receive corresponding images, further extending the deceptive capabilities at their disposal.

Human voice generation
The emergence of AI-generated voices that mimic real people has created new vulnerabilities in voice verification systems. Firms that rely heavily on these systems, such as investment firms, must take extra precautions to ensure the security of their clients’ assets.

Criminals can also use AI chatbots to build relationships with victims and exploit their emotions to convince them to invest money or share personal information. Pig butchering and romance scams are examples of fraud where AI chatbots can be highly effective: they are friendly, convincing, and can easily follow a script.

In particular, synthetic identity fraud has become an increasingly common tactic among cybercriminals. By creating fake personas with plausible social profiles, hackers can avoid detection while conducting financial crimes.

It is essential for organizations to remain vigilant and verify the identities of any new contacts or suppliers before engaging with them. Failure to do so could result in significant monetary loss and reputational damage.

Leverage AI to fight bad actors

In today’s digital landscape, businesses face increased fraud risks from advanced chatbots and generative technology. To combat this, businesses must use the same weapons as criminals and train AI-based tools to detect and prevent fraudulent activities.

Fraud prediction: Generative AI can analyze historical data to predict future fraudulent activities. By identifying patterns and potential risk factors in that data, it can help fraud examiners anticipate and prevent fraudulent behavior, while machine learning algorithms flag suspicious activity for further investigation.
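As a rough illustration of the pattern-flagging idea above, here is a minimal sketch that trains an unsupervised anomaly detector on historical transaction features and flags outliers for analyst review. It uses scikit-learn’s IsolationForest; the feature names, sample values, and contamination setting are illustrative assumptions, not a description of any Experian product.

```python
# Minimal sketch: flag anomalous transactions for manual review.
# Assumptions: illustrative feature columns and values; scikit-learn's
# IsolationForest as the anomaly detector.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical historical transaction features.
history = pd.DataFrame({
    "amount":        [25.0, 40.0, 12.5, 33.0, 900.0, 28.0],
    "hour":          [13, 9, 18, 11, 3, 15],
    "txns_last_24h": [2, 1, 3, 2, 14, 1],
    "km_from_home":  [1.2, 0.5, 3.0, 2.2, 850.0, 1.0],
})

# Train an unsupervised model on past behavior; 'contamination' is the
# assumed share of anomalous records and would be tuned in practice.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(history)

# Score new activity: -1 means "looks anomalous, route to an analyst".
new_txns = pd.DataFrame({
    "amount":        [30.0, 1200.0],
    "hour":          [12, 4],
    "txns_last_24h": [2, 20],
    "km_from_home":  [1.5, 4000.0],
})
new_txns["flag"] = model.predict(new_txns)
print(new_txns)
```

In practice, a model like this would be trained on far more history and combined with supervised models built from confirmed fraud labels.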

Fraud Investigation: In addition to preventing fraud, generative AI can assist fraud examiners in investigating suspicious activities by generating scenarios and identifying potential suspects. By analyzing email communications and social media activity, generative AI can uncover hidden connections between suspects and identify potential fraudsters.
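To make the “hidden connections” idea concrete, here is a hedged sketch that links applications sharing an identifier (phone number or device) into a graph and surfaces clusters worth investigating. It uses the networkx library; the records and identifiers below are fabricated for illustration.

```python
# Minimal sketch: surface clusters of applications that share identifiers.
# Assumptions: networkx for the graph; the records below are made up.
import networkx as nx

applications = [
    {"app_id": "A1", "phone": "555-0101", "device": "dev-1"},
    {"app_id": "A2", "phone": "555-0101", "device": "dev-2"},  # shares phone with A1
    {"app_id": "A3", "phone": "555-0199", "device": "dev-2"},  # shares device with A2
    {"app_id": "A4", "phone": "555-0300", "device": "dev-9"},  # unconnected
]

G = nx.Graph()
for app in applications:
    G.add_node(app["app_id"])
    # Link each application to the identifiers it used.
    G.add_edge(app["app_id"], "phone:" + app["phone"])
    G.add_edge(app["app_id"], "device:" + app["device"])

# Connected components group applications tied together by shared identifiers;
# larger clusters are candidates for a closer fraud investigation.
for component in nx.connected_components(G):
    apps = sorted(n for n in component if n.startswith("A"))
    if len(apps) > 1:
        print("Possible linked applications:", apps)
```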

To confirm the authenticity of users, financial institutions should adopt sophisticated identity verification methods, including liveness detection algorithms, document-centric identity proofing, and predictive analytics models.

These measures can help prevent bots from infiltrating their systems and spreading disinformation, while also protecting against scams and cyberattacks.
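For illustration only, here is a minimal sketch of how the verification signals above (a liveness score, a document-proofing score, and a predictive model score) might be combined into an approve/review/decline decision. The field names, weights, and thresholds are assumptions, not an actual verification API.

```python
# Minimal sketch: combine identity-verification signals into one decision.
# Assumptions: the signal names, weights, and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    liveness_score: float    # 0-1, from a liveness detection check
    document_score: float    # 0-1, from document-centric identity proofing
    model_risk_score: float  # 0-1, from a predictive analytics model (1 = risky)

def decide(signals: VerificationSignals) -> str:
    """Return 'approve', 'review', or 'decline' using illustrative thresholds."""
    # A hard failure on either proofing check sends the case to decline.
    if signals.liveness_score < 0.3 or signals.document_score < 0.3:
        return "decline"
    # Blend the remaining evidence into one risk estimate (weights are assumed).
    risk = (0.4 * (1 - signals.liveness_score)
            + 0.3 * (1 - signals.document_score)
            + 0.3 * signals.model_risk_score)
    if risk < 0.25:
        return "approve"
    return "review" if risk < 0.6 else "decline"

print(decide(VerificationSignals(liveness_score=0.95,
                                 document_score=0.9,
                                 model_risk_score=0.1)))  # -> approve
```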

In conclusion, financial institutions must stay vigilant and deploy new tools and technologies to protect against the evolving threat landscape. By adopting advanced identity verification solutions, organizations can safeguard themselves and their customers from potential risks.

To learn more about how Experian can help you leverage fraud prevention solutions, visit us online or request a call.
